# RoBERTa Text Encoding

- **Taiyi CLIP RoBERTa 326M ViT-H Chinese** (IDEA-CCNL, Apache-2.0). The first open-source Chinese CLIP model, pre-trained on 123 million image-text pairs, with a RoBERTa-large architecture as the text encoder. Tags: Text-to-Image, Transformers, Chinese. 108 downloads · 10 likes.
- **Taiyi CLIP Roberta Large 326M Chinese** (IDEA-CCNL, Apache-2.0). The first open-source Chinese CLIP model, pre-trained on 123 million image-text pairs, supporting Chinese image-text feature extraction and zero-shot classification (see the usage sketch after this list). Tags: Text-to-Image, Transformers, Chinese. 10.37k downloads · 39 likes.
- **Taiyi CLIP Roberta 102M Chinese** (IDEA-CCNL, Apache-2.0). The first open-source Chinese CLIP model, pre-trained on 123 million image-text pairs, with a text encoder based on the RoBERTa-base architecture. Tags: Text-to-Image, Transformers, Chinese. 558 downloads · 51 likes.
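
The models above share one recipe: a Chinese RoBERTa text encoder trained to align its sentence embeddings with an existing CLIP vision encoder, so zero-shot classification amounts to comparing L2-normalized text and image embeddings. The sketch below is a minimal, hedged example of that workflow; the Hugging Face model ID `IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese`, its pairing with `openai/clip-vit-base-patch32`, and loading the text encoder as a BERT-style sequence-classification model whose logits serve as the embedding are assumptions taken from the upstream model cards, not from this page.

```python
# Zero-shot classification sketch with a Taiyi Chinese CLIP pairing.
# Assumptions (not stated on this page): the Taiyi text encoder loads as a
# BERT-style sequence-classification model whose logits act as the text
# embedding, and it is aligned to the OpenAI ViT-B/32 CLIP image encoder.
import requests
import torch
from PIL import Image
from transformers import BertForSequenceClassification, BertTokenizer, CLIPModel, CLIPProcessor

candidate_labels = ["一只猫", "一只狗", "一辆汽车"]  # Chinese candidate captions

# Chinese RoBERTa text encoder (assumed Hugging Face model ID)
text_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese")
text_encoder = BertForSequenceClassification.from_pretrained(
    "IDEA-CCNL/Taiyi-CLIP-Roberta-102M-Chinese").eval()

# CLIP image encoder the text encoder is assumed to be aligned with
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any test image
image = Image.open(requests.get(url, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # Image features from the CLIP vision tower
    image_features = clip_model.get_image_features(**image_inputs)
    # Text features: the classification head's logits serve as the embedding
    text_ids = text_tokenizer(candidate_labels, return_tensors="pt", padding=True)["input_ids"]
    text_features = text_encoder(text_ids).logits

    # Cosine similarity between L2-normalized embeddings, scaled as in CLIP
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    logits_per_image = clip_model.logit_scale.exp() * image_features @ text_features.t()
    probs = logits_per_image.softmax(dim=-1)

for label, p in zip(candidate_labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")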
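
The 326M variants are reported on their upstream model cards to pair with larger vision encoders (ViT-L/14 and ViT-H/14, respectively); swapping the model IDs in the sketch should be enough, provided the embedding dimensions of the chosen text and image encoders match.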